22 research outputs found

    Using Decision Trees for Coreference Resolution

    Full text link
    This paper describes RESOLVE, a system that uses decision trees to learn how to classify coreferent phrases in the domain of business joint ventures. An experiment is presented in which the performance of RESOLVE is compared to the performance of a manually engineered set of rules for the same task. The results show that decision trees achieve higher performance than the rules on two of the three evaluation metrics developed for the coreference task. In addition to achieving better performance than the rules, RESOLVE provides a framework that facilitates the exploration of the types of knowledge that are useful for solving the coreference problem.
    Comment: 6 pages; LaTeX source; 1 uuencoded compressed EPS file (separate); uses ijcai95.sty, named.bst, epsf.tex; to appear in Proc. IJCAI '95
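    As a rough illustration of the approach the abstract describes, and not the paper's actual feature set or learner configuration, the sketch below trains a decision tree to classify candidate phrase pairs as coreferent or not. The pairwise features and toy data are assumptions invented for the example.

```python
# Minimal sketch of RESOLVE-style pairwise coreference classification with a
# decision tree. Feature names and training pairs are illustrative assumptions.
from sklearn.tree import DecisionTreeClassifier

# Each row encodes one candidate pair of phrases as binary features, e.g.
# [same_head_noun, string_match, both_proper_names, same_semantic_class]
X_train = [
    [1, 1, 1, 1],  # "IBM Corp." / "IBM"              -> coreferent
    [0, 0, 1, 1],  # "the venture" / "Toyota"          -> not coreferent
    [1, 0, 0, 1],  # "the joint venture" / "the deal"  -> coreferent
    [0, 0, 0, 0],  # unrelated phrases                 -> not coreferent
]
y_train = [1, 0, 1, 0]  # 1 = coreferent, 0 = not coreferent

clf = DecisionTreeClassifier(criterion="entropy", random_state=0)
clf.fit(X_train, y_train)

# Classify a new candidate pair of phrases drawn from a document.
candidate_pair = [[1, 1, 0, 1]]
print("coreferent" if clf.predict(candidate_pair)[0] else "not coreferent")
```

    In a setup of this kind, each pair of candidate phrases from a document becomes one feature vector, so the learned tree plays the role that a manually engineered rule set would otherwise fill.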

    Cognition, Computers, and Car Bombs: How Yale Prepared Me for the 90's

    No full text
    early writings on artificial intelligence (Feigenbaum and Feldman 1963). It was here that I learned about a community of people who were trying to unravel the mysteries of human cognition by playing around with computers. This seemed a lot more interesting than Riemannian manifolds and Hausdorff spaces, or maybe I was just getting tired of all that time on the subway. One way or another, I decided to apply to a graduate program in computer science just in case there was some stronger connection between FORTRAN and human cognition than I had previously suspected. When Yale accepted me, I decided to throw all caution to the wind and trust the admissions committee. I packed up my basenji and set out for Yale in the summer of 1974 with a sense of grand adventure. I was moving toward light and truth, and my very first full screen text editor. As luck would have it, Professor Roger Schank, a specialist in artificial intelligence (AI) from Stanford, was also moving to Yale that same summer.

    Narrative Complexity Based on Summarization

    No full text

    Question Answering in the Storage of Narratives in Memory (La réponse à des questions dans la mise en mémoire de récits)

    No full text
    Dyer, Michael G. & Lehnert, Wendy G. (1982). La réponse à des questions dans la mise en mémoire de récits. Bulletin de psychologie, vol. 35, no. 356: Langage et compréhension, pp. 799-813.

    Dictionary construction by domain experts

    No full text
    Sites participating in the recent message understanding conferences have increasingly focused their research on developing methods for automated knowledge acquisition and tools for human-assisted knowledge engineering. However, it is important to remember that the ultimate users of these tools will be domain experts, not natural language processing researchers. Domain experts have extensive knowledge about the task and the domain, but will have little or no background in linguistics or text processing. Tools that assume familiarity with computational linguistics will be of limited use in practical development scenarios. To investigate practical dictionary construction, we conducted an experiment with government analysts. We wanted to demonstrate that domain experts with n

    A Domain Independent Semantic Parser

    No full text
    to a type of possible input word (i.e., they are types taken from one of the system knowledge bases). For example, the frame associated with the word "study" indicates that the AGENT must be human and that it can take a THEME (some abstract or physical object) and a LOCATION (a physical place). The AGENT is required, while THEME and LOCATION are optional. Once a frame has been chosen, bottom-up processing attempts to fill out the frame by fitting the input words into the appropriate slots. A frame is considered satisfactory if all of its obligatory roles can be filled.

    A final source of ambiguity must be dealt with. So far we have considered words of the input modifying the verb by playing a role with respect to it. We must also consider the possibility of one word of the input serving to modify the meaning of another input word. Because a given word can only modify certain types of other words, we attach words that can be modifiers to the word types that they can possibly modify. For example, the word "red" is attached to the type PHY-OBJ (PHYsical-OBJect) in the object knowledge base. Therefore, the word "red" can modify the word "house", since "house" is a PHY-OBJ, but cannot modify the word "idea", since "idea" is not. Using the modifier information, the system attempts to associate any modifier words with the words of the input that they modify. In this way, the "red" from the input will be attached as a modifier to the word "house".

    Using the above processing and knowledge sources, the parser will come up with two possible interpretations of the input: "John studies at the red house" (here the red house is taken as a location) and "John studies the red house" (here the red house is taken as the thing that John is studying). Notice that both interpretations account for all input words and fill all obligatory roles associated with the chosen verb.

    Current Functionality. The current system is able to recognize different uses of a verb depending on the modifiers present in th
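    The following is a minimal sketch of the frame-and-slot filling and modifier attachment just described; the type tables, the frame for "study", and all names are illustrative assumptions, not the parser's actual data structures.

```python
# Minimal sketch of frame/slot filling with type-constrained modifier
# attachment. The word types, MODIFIES table, and STUDY_FRAME are assumptions
# made up to mirror the example in the abstract.

WORD_TYPES = {"John": "HUMAN", "house": "PHY-OBJ", "idea": "ABSTRACT", "red": "MODIFIER"}
MODIFIES = {"red": {"PHY-OBJ"}}  # "red" may modify physical objects only

# Frame for "study": AGENT is obligatory; THEME and LOCATION are optional.
STUDY_FRAME = {
    "AGENT": {"types": {"HUMAN"}, "required": True},
    "THEME": {"types": {"PHY-OBJ", "ABSTRACT"}, "required": False},
    "LOCATION": {"types": {"PHY-OBJ"}, "required": False},
}

def attach_modifiers(words):
    """Attach each modifier to the first word whose type it is allowed to modify."""
    heads = [w for w in words if WORD_TYPES.get(w) != "MODIFIER"]
    attached = {}
    for w in words:
        for head in heads:
            if w in MODIFIES and WORD_TYPES[head] in MODIFIES[w]:
                attached.setdefault(head, []).append(w)
                break
    return heads, attached

def fill_frame(frame, words):
    """Return every assignment of head words to slots that fills all required roles."""
    heads, mods = attach_modifiers(words)
    readings = [{}]
    for head in heads:
        extended = []
        for slots in readings:
            for role, spec in frame.items():
                if role not in slots and WORD_TYPES[head] in spec["types"]:
                    extended.append({**slots, role: (head, mods.get(head, []))})
        readings = extended or readings
    required = {role for role, spec in frame.items() if spec["required"]}
    return [r for r in readings if required <= r.keys()]

# Content words of "John studies the red house": both the THEME reading and
# the LOCATION reading survive, mirroring the two interpretations above.
for reading in fill_frame(STUDY_FRAME, ["John", "red", "house"]):
    print(reading)
```

    Run on the content words of "John studies the red house", the sketch yields one reading with the red house filling THEME and one with it filling LOCATION, both of which satisfy the obligatory AGENT role.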